Similar resources
A Sparser Johnson-Lindenstrauss Transform
We give a Johnson-Lindenstrauss transform with column sparsity s = Θ(ε⁻¹ log(1/δ)) into optimal dimension k = O(ε⁻² log(1/δ)) to achieve distortion 1 ± ε with success probability 1 − δ. This is the first distribution to provide an asymptotic improvement over the Θ(k) sparsity bound for all values of ε, δ. Previous work of [Dasgupta-Kumar-Sarlós, STOC 2010] gave a distribution with s = Õ(ε⁻¹ log(1/δ...
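To make the parameters above concrete, here is a minimal numpy sketch of a column-sparse JL matrix with exactly s nonzero entries of value ±1/√s per column. The uniform placement of nonzeros used here is a simplification, not the paper's exact hashing/block construction, and all names are illustrative.

```python
import numpy as np

def sparse_jl_matrix(k, d, s, rng=None):
    """Build a k x d matrix with exactly s nonzeros per column,
    each +-1/sqrt(s). Simplified sketch: the paper places nonzeros
    via a more careful hashing/block scheme."""
    rng = np.random.default_rng(rng)
    A = np.zeros((k, d))
    for j in range(d):
        rows = rng.choice(k, size=s, replace=False)   # s distinct rows
        A[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return A

# Example: embed a unit vector and check its norm is preserved.
rng = np.random.default_rng(0)
d, eps, delta = 1000, 0.1, 0.01
k = int(np.ceil(np.log(1 / delta) / eps**2))   # k = O(eps^-2 log(1/delta))
s = int(np.ceil(np.log(1 / delta) / eps))      # s = O(eps^-1 log(1/delta))
x = rng.standard_normal(d)
x /= np.linalg.norm(x)
A = sparse_jl_matrix(k, d, s, rng=1)
print(abs(np.linalg.norm(A @ x) ** 2 - 1.0))   # typically well below eps
```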
On Using Toeplitz and Circulant Matrices for Johnson-Lindenstrauss Transforms
The Johnson-Lindenstrauss lemma is one of the cornerstone results in dimensionality reduction. It says that for any set of N vectors X ⊂ ℝⁿ, there exists a mapping f : X → ℝᵐ such that f(X) preserves all pairwise distances between vectors in X to within (1 ± ε) if m = O(ε⁻² lg N). Much effort has gone into developing fast embedding algorithms, with the Fast Johnson-Lindenstrauss transfor...
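As a hedged sketch of the circulant approach in this paper's title: a partial circulant matrix applied after random sign flipping can be evaluated with the FFT in O(n log n) time. The Gaussian choice for the circulant's defining vector and the 1/√m scaling are assumptions here, not necessarily the paper's exact construction.

```python
import numpy as np

def circulant_jl(x, m, rng=None):
    """Embed x in R^n into R^m via a random partial circulant matrix
    composed with a random sign diagonal, computed with the FFT."""
    rng = np.random.default_rng(rng)
    n = len(x)
    signs = rng.choice([-1.0, 1.0], size=n)   # diagonal D of random signs
    c = rng.standard_normal(n)                # first column of circulant C
    # C @ (D x) is a circular convolution, done in the Fourier domain.
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(signs * x)).real
    return y[:m] / np.sqrt(m)                 # keep first m coords, rescale

x = np.ones(1024) / 32.0                      # a unit vector in R^1024
print(np.linalg.norm(circulant_jl(x, m=256, rng=0)) ** 2)  # approx 1
```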
Johnson-Lindenstrauss notes
Pr_{A∼D}[ |‖Ax‖₂² − 1| > ε ] < δ. Lemma 2 implies Lemma 1, since we can set δ = 1/n² and then apply a union bound over the vectors (xᵢ − xⱼ)/‖xᵢ − xⱼ‖₂ for all i < j. The first proof of Lemma 2 was given by [15]. Since then, several proofs have been given where D can be taken as a distribution over matrices with independent Gaussian or Bernoulli entries, or, even more generally, Ω(log(1/δ))-wise indepen...
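A small Monte Carlo check of this distributional statement for the simplest case, where D is a matrix of i.i.d. N(0, 1/k) Gaussian entries. This is an illustrative sketch; the constant 8 in the choice of k is an assumption picked for slack, not taken from the notes.

```python
import numpy as np

rng = np.random.default_rng(0)
d, eps, delta = 100, 0.25, 0.05
k = int(np.ceil(8 * np.log(1 / delta) / eps**2))  # k = O(eps^-2 log(1/delta))
x = rng.standard_normal(d)
x /= np.linalg.norm(x)                            # fixed unit vector

# Estimate Pr_{A ~ D}[ |  ||Ax||_2^2 - 1 | > eps ] over random Gaussian A.
trials, failures = 1000, 0
for _ in range(trials):
    A = rng.standard_normal((k, d)) / np.sqrt(k)  # i.i.d. N(0, 1/k) entries
    failures += abs(np.linalg.norm(A @ x) ** 2 - 1.0) > eps
print(failures / trials)                          # should land below delta
```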
The Johnson-Lindenstrauss Lemma
Definition 1.1 Let N(0, 1) denote the one-dimensional normal distribution. This distribution has density n(x) = e^{−x²/2}/√(2π). Let N_d(0, 1) denote the d-dimensional Gaussian distribution, induced by picking each coordinate independently from the standard normal distribution N(0, 1). Let Exp(λ) denote the exponential distribution with parameter λ. The density function of the exponential distribu...
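A quick numerical sanity check of the densities just defined, comparing an empirical density estimate from samples against the stated formulas. This is purely illustrative and not from the notes themselves; the interval width h and test point t are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
gauss = rng.standard_normal(1_000_000)                    # N(0, 1)
expo = rng.exponential(scale=1.0 / lam, size=1_000_000)   # Exp(lam)

# Empirical probability of a small interval, divided by its width,
# should approximate the density at that point.
t, h = 0.5, 0.01
for name, sample, pdf in [
    ("N(0,1)", gauss, lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)),
    ("Exp(2)", expo, lambda x: lam * np.exp(-lam * x)),
]:
    emp = np.mean((sample > t) & (sample < t + h)) / h
    print(name, emp, pdf(t))   # the two numbers should roughly agree
```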
10: The Johnson-Lindenstrauss Lemma
In this chapter, we will prove that given a set P of n points in ℝᵈ, one can reduce the dimension of the points to k = O(ε⁻² log n) while preserving all pairwise distances up to a factor of 1 ± ε. Surprisingly, this reduction is done by randomly picking a subspace of k dimensions and projecting the points onto this random subspace. One way of thinking about this result is that we are "compressing" the input of size nd (i.e.,...
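A minimal sketch of this statement, using a scaled Gaussian matrix as a standard stand-in for projection onto a uniformly random k-dimensional subspace (the two are nearly equivalent for this purpose); the constant 4 in k is an assumed slack factor.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 50, 500, 0.3
k = int(np.ceil(4 * np.log(n) / eps**2))   # k = O(eps^-2 log n)
P = rng.standard_normal((n, d))            # n points in R^d

# Gaussian projection: a common stand-in for projecting onto a
# uniformly random k-dimensional subspace.
A = rng.standard_normal((k, d)) / np.sqrt(k)
Q = P @ A.T

# Check the worst relative distortion over all pairwise distances.
worst = 0.0
for i in range(n):
    for j in range(i + 1, n):
        orig = np.linalg.norm(P[i] - P[j])
        proj = np.linalg.norm(Q[i] - Q[j])
        worst = max(worst, abs(proj / orig - 1.0))
print(worst)                               # typically below eps
```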
Journal
Journal title: Journal of the ACM
Year: 2014
ISSN: 0004-5411, 1557-735X
DOI: 10.1145/2559902